Judging by popular and general-purpose computer vision challenges, such as ImageNet or Pascal VOC, neural networks have proven to be exceptionally accurate at recognition tasks. However, state-of-the-art accuracy often comes at a high computational price, requiring hardware acceleration to achieve real-time performance, while use cases such as smart cities require real-time analysis of images from fixed cameras. Because of the amount of network bandwidth these streams would generate, we cannot rely on offloading the computation to a centralized cloud; instead, a distributed edge cloud is expected to process the images locally. However, the edge is by nature resource-constrained, which places a limit on the computational complexity that can be executed there. Nonetheless, a meeting point between the edge and accurate real-time video analytics is needed. Specialized lightweight models trained on a per-camera basis could help, but as the number of cameras grows this quickly becomes unfeasible unless the process is automated. In this paper, we present and evaluate COVA (Contextually Optimized Video Analytics), a framework that helps automate the specialization of models for edge cameras. COVA automatically improves the accuracy of lightweight models through specialization. Moreover, we discuss and review each step involved in the process to understand the different trade-offs that each one entails. Additionally, we show how the sole assumption of static cameras allows us to make a series of considerations that greatly simplify the scope of the problem. Finally, experiments show that state-of-the-art models, i.e., those able to generalize to unseen environments, can be effectively used as teachers for smaller networks, improving accuracy at a constant computational cost. Our results show that COVA improves the accuracy of pre-trained models by 21% on average.
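As an illustration of the teacher-student specialization idea described above, the following hedged sketch (not COVA's actual code; the model architectures, data shapes, and training loop are placeholders) shows a large general-purpose "teacher" pseudo-labeling frames from one static camera and a lightweight "student" being fine-tuned on those labels:

```python
# Hypothetical sketch of per-camera specialization via a teacher model.
import torch
import torch.nn as nn
import torch.nn.functional as F

teacher = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 256), nn.ReLU(), nn.Linear(256, 10))
student = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 32), nn.ReLU(), nn.Linear(32, 10))

optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
frames = torch.randn(64, 3, 32, 32)               # stand-in for frames from one fixed camera

teacher.eval()
with torch.no_grad():
    pseudo_labels = teacher(frames).argmax(dim=1)  # teacher provides the "ground truth"

for _ in range(10):                                # specialization loop on local data
    logits = student(frames)
    loss = F.cross_entropy(logits, pseudo_labels)  # student mimics the teacher in this context
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```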
We seek methods to model, control, and analyze robot teams performing environmental monitoring tasks. During environmental monitoring, the goal is to have teams of robots collect various data throughout a fixed region for extended periods of time. Standard bottom-up task assignment methods do not scale as the number of robots and task locations increases and require computationally expensive replanning. Alternatively, top-down methods have been used to combat computational complexity, but most have been limited to the analysis of methods that focus on transition times between tasks. In this work, we study a class of nonlinear macroscopic models that we use to control a time-varying distribution of robots performing different tasks throughout an environment. Our proposed ensemble model and controller maintain desired time-varying populations of robots by leveraging naturally occurring interactions between robots performing tasks. We validate our approach at multiple levels of fidelity, including experimental results, demonstrating the effectiveness of our approach for environmental monitoring.
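The following minimal sketch conveys the general flavor of a macroscopic (population-level) model: a distribution of robots over tasks evolving under task-to-task transition rates. It is a simplified linear rate model with made-up rates, not the paper's nonlinear, interaction-driven ensemble model:

```python
# Toy macroscopic model: x[i] is the fraction of robots on task i, K[i][j] the i->j rate.
import numpy as np

K = np.array([[0.0, 0.3, 0.1],
              [0.2, 0.0, 0.4],
              [0.1, 0.2, 0.0]])       # hypothetical task-to-task transition rates

def dxdt(x):
    inflow = K.T @ x                  # robots arriving at each task
    outflow = K.sum(axis=1) * x       # robots leaving each task
    return inflow - outflow

x = np.array([1.0, 0.0, 0.0])         # all robots start on task 0
dt = 0.01
for _ in range(5000):                 # forward-Euler integration to t = 50
    x += dt * dxdt(x)
print(x)                              # steady-state distribution over tasks
```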
Any strategy used to distribute a robot ensemble over a set of sequential tasks is subject to inaccuracy due to robot-level uncertainties and environmental influences on the robots' behavior. We approach the problem of inaccuracy during task allocation by modeling and controlling the overall ensemble behavior. Our model represents the allocation problem as a stochastic jump process, and we regulate the mean and variance of such a process. The main contributions of this paper are establishing a structure for the transition rates of the equivalent stochastic jump process and formally showing that this approach leads to decoupled parameters that allow us to adjust the first- and second-order moments of the ensemble distribution over tasks, which gives us the flexibility to decrease the variance of the desired final distribution. This allows us to directly shape the impact of uncertainties on the group allocation over tasks. We introduce a detailed procedure to design the gains that achieve the desired mean and show how the additional parameters impact the covariance matrix, which is directly associated with the degree of task allocation precision. Our simulation and experimental results illustrate the successful control of several robot ensembles during task allocation.
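A rough Monte Carlo sketch of the modeling idea follows, with made-up jump rates rather than the paper's designed transition-rate structure: each robot jumps between two tasks as a stochastic jump process, and the empirical mean and variance of the allocation can be compared against the analytical binomial values:

```python
# Monte Carlo simulation of an ensemble allocation as a stochastic jump process.
import numpy as np

rng = np.random.default_rng(0)
N, steps, dt, runs = 100, 2000, 0.01, 500
k01, k10 = 0.4, 0.6                    # hypothetical jump rates task0->task1, task1->task0

state = np.zeros((runs, N), dtype=int)  # one row per independent run
for _ in range(steps):
    u = rng.random((runs, N))
    to1 = (state == 0) & (u < k01 * dt)
    to0 = (state == 1) & (u < k10 * dt)
    state[to1] = 1
    state[to0] = 0

counts = state.sum(axis=1)              # robots on task 1 per run
print(counts.mean(), counts.var())      # compare with N*k01/(k01+k10) and N*p*(1-p)
```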
This paper focuses on the broadcast of information on robot networks with stochastic network interconnection topologies. Problematic communication networks are almost unavoidable in areas where we wish to deploy multi-robotic systems, usually due to a lack of environmental consistency, accessibility, and structure. We tackle this problem by modeling the broadcast of information in a multi-robot communication network as a stochastic process with random arrival times, which can be produced by irregular robot movements, wireless attenuation, and other environmental factors. Using this model, we provide and analyze a receding horizon control strategy to control the statistics of the information broadcast. The resulting strategy compels the robots to re-direct their communication resources to different neighbors according to the current propagation process to fulfill global broadcast requirements. Based on this method, we provide an approach to compute the expected time to broadcast the message to all nodes. Numerical examples are provided to illustrate the results.
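As a toy illustration of broadcast over links with random arrival times (not the paper's model or its receding horizon controller), the sketch below estimates by simulation the expected time for a message to reach all nodes of a small network when each transmission incurs an exponentially distributed delay; the topology and rate are assumptions:

```python
# Monte Carlo estimate of the expected broadcast completion time on a fixed graph.
import heapq
import random

edges = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 3], 3: [1, 2]}
rate = 0.5                                   # exponential delay with mean 1/rate per link

def broadcast_time(source=0):
    informed = {source}
    heap = [(random.expovariate(rate), source, v) for v in edges[source]]
    heapq.heapify(heap)
    t = 0.0
    while len(informed) < len(edges):
        t, _, v = heapq.heappop(heap)        # next message arrival
        if v in informed:
            continue
        informed.add(v)
        for w in edges[v]:
            if w not in informed:
                heapq.heappush(heap, (t + random.expovariate(rate), v, w))
    return t

print(sum(broadcast_time() for _ in range(2000)) / 2000)
```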
Early recognition of clinical deterioration (CD) is vitally important for protecting patients from exacerbation or death. Electronic health record (EHR) data have been widely employed in Early Warning Scores (EWS) to measure the risk of CD in hospitalized patients. Recently, EHR data have also been used in machine learning (ML) models to predict mortality and CD, and these ML models have shown superior performance in CD prediction compared to EWS. Since EHR data are structured and tabular, conventional ML models are generally applied to them, and less effort has been put into evaluating the performance of artificial neural networks on EHR data. Thus, in this article, an extremely boosted neural network (XBNet) is used to predict CD, and its performance is compared to eXtreme Gradient Boosting (XGBoost) and random forest (RF) models. For this purpose, 103,105 samples from thirteen Brazilian hospitals are used to generate the models. Moreover, principal component analysis (PCA) is employed to verify whether it can improve the performance of the adopted models. The performance of the ML models and the Modified Early Warning Score (MEWS), a candidate EWS, is evaluated for CD prediction in terms of accuracy, precision, recall, F1-score, and geometric mean (G-mean) using 10-fold cross-validation. According to the experiments, the XGBoost model obtained the best results in predicting CD on the Brazilian hospitals' data.
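A minimal sketch of the kind of evaluation pipeline described above follows, using synthetic data in place of the (non-public) hospital EHR records; the class imbalance, model hyperparameters, and metric set are illustrative:

```python
# 10-fold cross-validated XGBoost baseline on synthetic tabular data.
from sklearn.datasets import make_classification
from sklearn.model_selection import StratifiedKFold, cross_validate
from xgboost import XGBClassifier

X, y = make_classification(n_samples=5000, n_features=30, weights=[0.9, 0.1], random_state=0)
model = XGBClassifier(n_estimators=200, max_depth=4, eval_metric="logloss")
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)

scores = cross_validate(model, X, y, cv=cv, scoring=["accuracy", "precision", "recall", "f1"])
for metric in ("accuracy", "precision", "recall", "f1"):
    print(metric, scores[f"test_{metric}"].mean())
```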
IceCube is a cubic-kilometer array of optical sensors deployed between 1.45 km and 2.45 km below the surface of the ice sheet at the South Pole, designed to detect atmospheric and astrophysical neutrinos with energies between 1 GeV and 1 PeV. The classification and reconstruction of events from the in-ice detectors play a central role in IceCube data analysis. Reconstructing and classifying events is challenging due to the geometry of the detector, the inhomogeneous scattering and absorption of light in the ice, and, below 100 GeV, the relatively small number of signal photons produced per event. To address this challenge, IceCube events can be represented as point-cloud graphs, with a graph neural network (GNN) serving as the classification and reconstruction method. The GNN is able to distinguish neutrino events from cosmic-ray backgrounds, classify different neutrino event types, and reconstruct the deposited energy, direction, and interaction vertex. Based on simulation, we provide a comparison in the 1-100 GeV energy range with the state-of-the-art maximum-likelihood techniques used in current IceCube analyses, including the effects of known systematic uncertainties. For neutrino event classification, the GNN increases the signal efficiency by 18% at a fixed false-positive rate (FPR) compared to current IceCube methods. Alternatively, at a fixed signal efficiency, the GNN reduces the FPR by more than a factor of 8, to below half a percent. For the reconstruction of energy, direction, and interaction vertex, the resolution improves by 13%-20% on average compared to current maximum-likelihood techniques. When run on a GPU, the GNN can process IceCube events at a rate close to the median IceCube trigger rate of 2.7 kHz, which opens up the possibility of using low-energy neutrinos in online searches for transient events.
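An illustrative sketch, not the collaboration's code, of representing an event as a point-cloud graph and classifying it with a small GNN; it assumes PyTorch Geometric is available, and the hit features, graph construction, and class count are placeholders:

```python
# Toy point-cloud-graph event classifier.
import torch
from torch_geometric.data import Data
from torch_geometric.nn import GCNConv, global_mean_pool

class EventClassifier(torch.nn.Module):
    def __init__(self, n_features=6, n_classes=3):
        super().__init__()
        self.conv1 = GCNConv(n_features, 64)
        self.conv2 = GCNConv(64, 64)
        self.head = torch.nn.Linear(64, n_classes)

    def forward(self, data):
        x = self.conv1(data.x, data.edge_index).relu()
        x = self.conv2(x, data.edge_index).relu()
        x = global_mean_pool(x, data.batch)       # one embedding per event
        return self.head(x)

# A fake "event": 20 sensor hits, each with position/time/charge-like features,
# connected by edges (random here for brevity).
hits = torch.randn(20, 6)
edge_index = torch.randint(0, 20, (2, 60))
event = Data(x=hits, edge_index=edge_index, batch=torch.zeros(20, dtype=torch.long))
print(EventClassifier()(event).shape)             # logits over event classes
```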
The Universal Morphology (UniMorph) project is a collaborative effort to provide broad-coverage, standardized morphological paradigms for hundreds of the world's languages. The project comprises two major thrusts: a language-independent feature schema for rich morphological annotation, and a type-level resource of annotated data in that schema for a wide variety of languages. This paper presents the expansions and improvements made on several fronts over the last couple of years (since McCarthy et al. (2020)). The collaborative effort of numerous linguists has added 67 new languages, including 30 endangered languages. We have made several improvements to the extraction pipeline to address issues such as missing gender and macron information. We have also amended the schema to adopt the hierarchical structure required by morphological phenomena such as multiple-argument agreement and case stacking, while adding some missing morphological features to make the schema more inclusive. Since the last UniMorph release, we have also augmented the database with morpheme segmentation for 16 languages. Finally, this new release pushes the incorporation of derivational morphology into UniMorph by enriching the data and the annotation schema with instances representing derivational processes from MorphyNet.
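For readers unfamiliar with the resource, UniMorph data are distributed as tab-separated (lemma, inflected form, feature bundle) triples; the toy English entries below are illustrative rather than copied from a release:

```python
# Parsing UniMorph-style (lemma, form, features) triples.
entries = [
    "run\tran\tV;PST",
    "run\trunning\tV;V.PTCP;PRS",
    "dog\tdogs\tN;PL",
]
for line in entries:
    lemma, form, features = line.split("\t")
    print(lemma, form, features.split(";"))   # feature bundle split into schema tags
```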
Adaptive filters are at the core of many signal processing applications, ranging from acoustic noise reduction to echo cancellation, array beamforming, and channel equalization, to more recent sensor network applications in surveillance, target localization, and tracking. A trending approach in this direction is to recur to in-network distributed processing, in which individual nodes implement adaptation rules and diffuse their estimates to the network. When the a priori knowledge about the filtering scenario is limited or imprecise, selecting the most adequate filter structure and adjusting its parameters becomes a challenging task, and erroneous choices can lead to inadequate performance. To address this difficulty, one useful approach is to rely on combinations of adaptive structures. Combinations of adaptive filters exploit, to some extent, the same divide-and-conquer principle that has also been successfully exploited by the machine-learning community (e.g., in bagging or boosting). In particular, the problem of combining the outputs of several learning algorithms (mixture of experts) has been studied in the computational learning field from a different perspective: rather than studying the expected performance of the mixture, deterministic bounds are derived that apply to individual sequences and therefore reflect worst-case scenarios. These bounds require assumptions different from the ones typically used in adaptive filtering, which is the focus of this overview article. We review the key ideas and principles behind these combination schemes, with an emphasis on design rules, and we illustrate their performance with a variety of examples.
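A minimal NumPy sketch of one classic instance of such a combination: a convex mixture of a fast and a slow LMS filter whose mixing weight is adapted through a sigmoid. The signal model, step sizes, and clipping range are illustrative choices, not prescriptions from the article:

```python
# Convex combination of two LMS filters identifying an unknown FIR plant.
import numpy as np

rng = np.random.default_rng(0)
N, M = 20000, 8
w_true = rng.standard_normal(M)                       # unknown plant
x = rng.standard_normal(N + M)                        # white input signal
d = np.array([w_true @ x[n:n + M] for n in range(N)]) + 0.01 * rng.standard_normal(N)

w1, w2 = np.zeros(M), np.zeros(M)                     # fast and slow LMS filters
mu1, mu2, mu_a = 0.05, 0.005, 10.0
a = 0.0
for n in range(N):
    u = x[n:n + M]
    y1, y2 = w1 @ u, w2 @ u
    lam = 1.0 / (1.0 + np.exp(-a))                    # mixing weight in (0, 1)
    e, e1, e2 = d[n] - (lam * y1 + (1 - lam) * y2), d[n] - y1, d[n] - y2
    w1 += mu1 * e1 * u                                # each component adapts independently
    w2 += mu2 * e2 * u
    a = np.clip(a + mu_a * e * (y1 - y2) * lam * (1 - lam), -4.0, 4.0)  # adapt the mixture

print(np.linalg.norm(w1 - w_true), np.linalg.norm(w2 - w_true))
```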
Large-scale analysis of pedestrian infrastructure, particularly sidewalks, is critical to human-centric urban planning and design. Benefiting from the rich dataset of planimetric features and high-resolution orthoimages made available through the New York City open data portal, we train a computer vision model to detect sidewalks, roads, and buildings from remote-sensing imagery, achieving 83% mIoU on a held-out test set. We apply shape analysis techniques to study different attributes of the extracted sidewalks. More specifically, we conduct a tile-wise analysis of sidewalk width, angle, and curvature, which, beyond their general impact on the walkability and accessibility of urban areas, are known to play a significant role in the mobility of wheelchair users. The preliminary results are promising, offering a glimpse of the potential of the proposed approach to be adopted in different cities and to give researchers and practitioners a more vivid picture of the pedestrian realm.
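As a hypothetical example of one of the tile-wise shape attributes (width), the sketch below estimates sidewalk width from a binary segmentation mask via a distance transform; the mask and the assumed ground resolution are synthetic placeholders, not the NYC data:

```python
# Estimate sidewalk width as twice the medial-axis distance to the nearest background pixel.
import numpy as np
from scipy.ndimage import distance_transform_edt

mask = np.zeros((100, 100), dtype=bool)
mask[45:55, :] = True                      # fake 10-pixel-wide sidewalk strip
pixel_size_m = 0.15                        # assumed ground resolution of the orthoimage

dist = distance_transform_edt(mask)        # distance to nearest non-sidewalk pixel
ridge = dist >= dist.max() - 0.5           # rough medial-axis pixels
width_m = 2 * dist[ridge].mean() * pixel_size_m
print(round(width_m, 2), "m")
```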
Automated data-driven modeling, the process of directly discovering the governing equations of a system from data, is increasingly being used across the scientific community. PySINDy is a Python package that provides tools for applying the sparse identification of nonlinear dynamics (SINDy) approach to data-driven model discovery. In this major update to PySINDy, we implement several advanced features that enable the discovery of more general differential equations from noisy and limited data. The library of candidate terms is extended for the identification of actuated systems, partial differential equations (PDEs), and implicit differential equations. Robust formulations, including the integral form of SINDy and ensembling techniques, are also implemented to improve performance on real-world data. Finally, we provide a range of new optimization algorithms, including several sparse regression techniques and algorithms to enforce and promote inequality constraints and stability. Together, these updates enable entirely new SINDy model discovery capabilities that have not yet been reported in the literature, such as constrained PDE identification and ensembling with different sparse regression optimizers.
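A short end-to-end example of the basic PySINDy workflow on a toy damped linear oscillator; the system and hyperparameters are illustrative and do not exercise the new features listed above:

```python
# Recover governing equations of a toy system with PySINDy.
import numpy as np
from scipy.integrate import solve_ivp
import pysindy as ps

def rhs(t, z):
    x, y = z
    return [-0.1 * x + 2.0 * y, -2.0 * x - 0.1 * y]

t = np.linspace(0, 25, 2500)
sol = solve_ivp(rhs, (t[0], t[-1]), [2.0, 0.0], t_eval=t)
X = sol.y.T                                    # samples x state dimensions

model = ps.SINDy(
    optimizer=ps.STLSQ(threshold=0.05),        # sparse regression step
    feature_library=ps.PolynomialLibrary(degree=2),
)
model.fit(X, t=t)
model.print()                                  # should recover the linear dynamics
```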